Convex formulation for multi-task L1-, L2-, and LS-SVMs

Authors

Abstract

Quite often a machine learning problem lends itself to be split into several well-defined subproblems, or tasks. The goal of Multi-Task Learning (MTL) is to leverage their joint learning from two different perspectives: on the one hand, a single, overall model, and on the other hand, task-specific models. In this way, the solution found by MTL may be better than those of either the common or the task-specific models alone. Starting with the work of Evgeniou et al., Support Vector Machines (SVMs) have lent themselves naturally to this approach. This paper proposes a convex MTL formulation for L1-, L2- and LS-SVM models that results in dual problems quite similar to the single-task ones, but with multi-task kernels; in turn, this makes it possible to train the models using standard solvers. As an alternative approach, the direct optimal combination of already trained common and task-specific models can also be considered. In this paper, a procedure to compute the combining parameter optimal with respect to four different error functions is derived. As shown experimentally, the proposed convex MTL approach generally performs better than this direct optimal combination, and both are better than the straight use of either the common or the task-specific models.
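
To make the multi-task kernel idea concrete, here is a minimal sketch in Python with scikit-learn. It follows the Evgeniou et al. style of kernel, K((x, t), (x', t')) = (1/mu + 1[t = t']) k(x, x'), where one parameter trades off the common model against the task-specific ones; the RBF base kernel, the parameter name `mu`, and the toy data are illustrative assumptions rather than the paper's exact setup. Because the kernel is precomputed, a completely standard SVM solver does the training, which is the practical point of the dual formulation described in the abstract.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel

def multitask_kernel(X, tasks, X2=None, tasks2=None, mu=1.0, gamma=1.0):
    # Multi-task kernel in the spirit of Evgeniou et al.:
    #   K((x, t), (x', t')) = (1/mu + 1[t == t']) * k(x, x').
    # Large mu shrinks all tasks toward one common model;
    # small mu decouples them into independent task models.
    if X2 is None:
        X2, tasks2 = X, tasks
    base = rbf_kernel(X, X2, gamma=gamma)                    # k(x, x')
    same_task = (tasks[:, None] == tasks2[None, :]).astype(float)
    return (1.0 / mu + same_task) * base

# Toy problem: two tasks whose class boundaries are shifted copies.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
tasks = rng.integers(0, 2, size=200)
y = np.sign(X[:, 0] + 0.5 * (2 * tasks - 1))

K = multitask_kernel(X, tasks, mu=1.0, gamma=0.5)
clf = SVC(kernel="precomputed", C=1.0).fit(K, y)             # standard solver
print("train accuracy:", clf.score(K, y))
```

The alternative mentioned in the abstract combines two already trained models, f = lam * f_common + (1 - lam) * f_task, with lam chosen optimally for a given error function. The paper derives this parameter for four error functions; the squared-error case below is the one with an obvious closed form, shown purely as an illustration.

```python
def optimal_lambda(f_common, f_task, y):
    # Minimize ||lam * f_common + (1 - lam) * f_task - y||^2 over lam.
    # With d = f_common - f_task this is a scalar quadratic, so
    # lam* = d . (y - f_task) / (d . d).
    d = f_common - f_task
    return float(np.dot(y - f_task, d) / np.dot(d, d))
```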


Similar articles

Clustered Multi-Task Learning: A Convex Formulation

In multi-task learning several related tasks are considered simultaneously, with the hope that by an appropriate sharing of information across tasks, each task may benefit from the others. In the context of learning linear functions for supervised classification or regression, this can be achieved by including a priori information about the weight vectors associated with the tasks, and how they...
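
For a flavor of the regularizers used in this clustered setting, the sketch below computes the usual three-part penalty: a global term on the mean weight vector, a between-cluster term, and a within-cluster term. The function name, parameter names, and the fixed cluster assignment are assumptions for illustration; the cited paper's contribution is precisely to relax the unknown assignment into a convex problem rather than fixing it as done here.

```python
import numpy as np

def clustered_mtl_penalty(W, clusters, eps_mean=1.0, eps_between=1.0, eps_within=1.0):
    # W: (n_features, n_tasks), one weight vector per task (column).
    # clusters: integer cluster label per task.
    n_tasks = W.shape[1]
    w_bar = W.mean(axis=1, keepdims=True)                    # global mean
    omega_mean = n_tasks * np.sum(w_bar ** 2)
    omega_between = 0.0
    omega_within = 0.0
    for c in np.unique(clusters):
        Wc = W[:, clusters == c]
        wc_bar = Wc.mean(axis=1, keepdims=True)              # cluster mean
        omega_between += Wc.shape[1] * np.sum((wc_bar - w_bar) ** 2)
        omega_within += np.sum((Wc - wc_bar) ** 2)
    return eps_mean * omega_mean + eps_between * omega_between + eps_within * omega_within
```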


α and β Stability for Additively Regularized LS-SVMs via Convex Optimization

This paper considers the design of an algorithm that explicitly maximizes its own stability. The stability criterion, as often used for the construction of bounds on the generalization error of a learning algorithm, is proposed to compensate for overfitting. The primal-dual formulation characterizing Least Squares Support Vector Machines (LS-SVMs) and the additive regularization framework [13] ar...


A Convex Formulation for Learning Task Relationships in Multi-Task Learning

Multi-task learning is a learning paradigm which seeks to improve the generalization performance of a learning task with the help of some other related tasks. In this paper, we propose a regularization formulation for learning the relationships between tasks in multi-task learning. This formulation can be viewed as a novel generalization of the regularization framework for single-task learning....
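
A common way to make task relationships learnable is a regularizer of the form tr(W Omega^{-1} W^T), where Omega is a task covariance matrix constrained to be positive semidefinite with unit trace. For fixed task weights W, the optimal Omega has a closed form, which gives one step of an alternating scheme. The sketch below shows only that step, as an illustration of this family of methods under the stated assumptions, not the cited paper's complete algorithm:

```python
import numpy as np
from scipy.linalg import sqrtm

def update_task_covariance(W):
    # For the penalty tr(W @ inv(Omega) @ W.T) with Omega PSD and
    # tr(Omega) = 1, the minimizer over Omega for fixed W (tasks as
    # columns) is (W.T @ W)^(1/2) rescaled to unit trace.
    S = np.real(sqrtm(W.T @ W))
    return S / np.trace(S)
```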


Sparse LS-SVMs with L0-norm minimization

Least-Squares Support Vector Machines (LS-SVMs) have been successfully applied in many classification and regression tasks. Their main drawback is the lack of sparseness of the final models. Thus, a procedure to sparsify LS-SVMs is a frequent desideratum. In this paper, we adapt to the LS-SVM case a recent work for sparsifying classical SVM classifiers, which is based on an iterative approximat...
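
For context on why sparsification is needed: LS-SVM training replaces the SVM quadratic program with a single linear system, so in general every dual coefficient alpha_i is nonzero and every training point ends up a support vector. Below is a minimal sketch of the regression-form system, with assumed parameter names and an RBF kernel chosen for illustration.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

def train_ls_svm(X, y, gamma_reg=1.0, gamma_rbf=1.0):
    # Solve [[0, 1^T], [1, K + I/gamma_reg]] @ [b; alpha] = [0; y].
    # Generically all alpha_i != 0: the non-sparseness the paper targets.
    n = len(y)
    K = rbf_kernel(X, gamma=gamma_rbf)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma_reg
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]                                   # b, alpha

# Prediction is then f(x) = sum_i alpha_i * k(x_i, x) + b over all points.
```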


A Convex Feature Learning Formulation for Latent Task Structure Discovery

This paper considers the multi-task learning problem in the setting where some relevant features may be shared across a few related tasks. Most of the existing methods assume the extent to which the given tasks are related, or share a common feature space, to be known a priori. In real-world applications, however, it is desirable to automatically discover the groups of related tasks that share ...



Journal

Journal title: Neurocomputing

Year: 2021

ISSN: 0925-2312, 1872-8286

DOI: https://doi.org/10.1016/j.neucom.2021.01.137